In [1]:
import graphlab
In [2]:
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
In [3]:
graphlab.canvas.set_target('ipynb')
In [4]:
image_train['image'].show()
In [5]:
raw_pixel_model = graphlab.logistic_classifier.create(image_train, target='label',
                                                      features=['image_array'])
In [6]:
image_test[0:3]['image'].show()
In [7]:
image_test[0:3]['label']
Out[7]:
In [8]:
raw_pixel_model.predict(image_test[0:3])
Out[8]:
The model makes wrong predictions for all three images.
In [9]:
raw_pixel_model.evaluate(image_test)
Out[9]:
This model performs poorly, achieving only about 46% accuracy on the test set.
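For reference, `evaluate` reports classification accuracy: the fraction of test examples whose predicted label matches the true label. A minimal sketch in plain Python (the labels below are made up for illustration, not taken from the dataset):

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / float(len(actual))

# Hypothetical predictions vs. true labels for five test images.
predicted = ['cat', 'dog', 'bird', 'cat', 'automobile']
actual    = ['cat', 'bird', 'bird', 'dog', 'automobile']

print(accuracy(predicted, actual))  # 3 of 5 correct -> 0.6
```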
We only have 2005 data points, far too few to train a deep neural network effectively from scratch. Instead, we will use transfer learning: we take deep features learned on the full ImageNet dataset and train a simple classifier on this small dataset.
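The transfer-learning recipe can be sketched without GraphLab: treat the pretrained network's outputs as fixed feature vectors, then fit a simple classifier on top of them. Below is a toy version using a nearest-centroid classifier; the 2-D "deep features" and labels are invented purely for illustration (real deep features have thousands of dimensions):

```python
# Toy transfer-learning sketch: a simple classifier (nearest centroid)
# trained on fixed, precomputed feature vectors. In practice the feature
# vectors would come from a deep network pretrained on ImageNet.

def centroid(vectors):
    """Coordinate-wise mean of a list of equal-length vectors."""
    n = float(len(vectors))
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(features, labels):
    """Compute one centroid per class label."""
    by_label = {}
    for f, l in zip(features, labels):
        by_label.setdefault(l, []).append(f)
    return {l: centroid(fs) for l, fs in by_label.items()}

def predict(model, feature):
    """Assign the label of the nearest class centroid (squared distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda l: sqdist(model[l], feature))

# Made-up 2-D "deep features" for two classes.
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels   = ['cat', 'cat', 'dog', 'dog']
model = train(features, labels)
print(predict(model, [0.85, 0.15]))  # -> cat
```

Because the features are precomputed and fixed, the model being trained is small and cheap, which is why it works even on 2005 examples.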
In [10]:
len(image_train)
Out[10]:
The two (commented-out) lines below show how to compute deep features. This computation takes a while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
In [11]:
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
As we can see, the column deep_features already contains the pre-computed deep features for this dataset.
In [12]:
image_train.head()
Out[12]:
In [13]:
deep_features_model = graphlab.logistic_classifier.create(image_train,
                                                          features=['deep_features'],
                                                          target='label')
In [14]:
image_test[0:3]['image'].show()
In [15]:
deep_features_model.predict(image_test[0:3])
Out[15]:
The classifier with deep features gets all of these images right!
In [16]:
deep_features_model.evaluate(image_test)
Out[16]: